Results 1 - 3 of 3
1.
Digital Scholarship in the Humanities ; 38(1):99-110, 2023.
Article in English | Academic Search Complete | ID: covidwho-2295499

ABSTRACT

The COVID-19 pandemic created an infodemic, confronting people with a massive amount of information through social media platforms such as Twitter and Instagram. These platforms have made information easy to circulate and have paved the way for information and misinformation to mix. One way to counter an infodemic is to limit the distribution of false information and filter out fake news, reducing its negative impact on society. This article studies the properties of fake news in English and Persian using the textual information carried by the language of the news. To this end, we study properties of the text based on information theory, stylometric features extracted from raw text, text readability, and linguistic information such as phonology, syntax, and morphology. We use the XLM-RoBERTa representation with a convolutional neural network classifier as the basic model to detect English and Persian COVID-19 fake news. In addition, we propose learning scenarios in which different feature sets are concatenated with the contextualized representation. According to the experimental results, adding any of these textual features to the basic model improves classifier performance for both English and Persian. Readability and stylometric features are the most effective for detecting English fake news, improving performance by 2.72% in F-measure. Augmenting this feature set with the information-theoretic and morphological features improves classifier performance by 3.79% in F-measure for Persian.
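
As a rough illustration of the architecture this abstract describes (not the authors' released code), the following Python sketch pools XLM-RoBERTa token representations with a small convolutional layer and concatenates the result with a handful of hypothetical handcrafted features (for example, readability and stylometry scores) before a binary fake/real classifier. The feature values, filter count, and other hyperparameters are assumptions.

```python
# Minimal sketch, assuming a handcrafted-feature-plus-XLM-RoBERTa setup.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel


class FeatureAugmentedCNN(nn.Module):
    def __init__(self, n_extra_features: int, n_filters: int = 128):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("xlm-roberta-base")
        hidden = self.encoder.config.hidden_size  # 768 for the base model
        self.conv = nn.Conv1d(hidden, n_filters, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        # The classifier sees the CNN output concatenated with the extra features.
        self.classifier = nn.Linear(n_filters + n_extra_features, 2)

    def forward(self, input_ids, attention_mask, extra_features):
        # Contextual token representations: (batch, seq_len, hidden)
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        x = self.conv(hidden_states.transpose(1, 2))   # (batch, n_filters, seq_len)
        x = self.pool(torch.relu(x)).squeeze(-1)       # (batch, n_filters)
        x = torch.cat([x, extra_features], dim=-1)     # append handcrafted features
        return self.classifier(x)                      # logits: real vs. fake


tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = FeatureAugmentedCNN(n_extra_features=4)
batch = tokenizer(["An example headline to classify."],
                  return_tensors="pt", padding=True, truncation=True)
# Hypothetical handcrafted features, e.g. readability score, average word length, ...
extra = torch.tensor([[62.0, 4.7, 0.31, 18.0]])
logits = model(batch["input_ids"], batch["attention_mask"], extra)
```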

2.
Expert Syst ; : e13008, 2022 Apr 03.
Article in English | MEDLINE | ID: covidwho-2260516

ABSTRACT

The spread of fake news on social media has increased dramatically in recent years, and fake news detection systems have consequently received researchers' attention worldwide. During the COVID-19 outbreak in 2019 and the ensuing worldwide pandemic, the importance of this issue became even more apparent. Accordingly, many researchers have begun collecting English datasets and studying COVID-19 fake news detection. However, many low-resource languages, including Persian, lack the annotated data needed to build accurate tools for automatic COVID-19 fake news detection. In this article, we develop a Persian corpus in the COVID-19 domain annotated for fake news and provide a model for detecting Persian COVID-19 fake news. Given the impressive advances in multilingual pre-trained language models, cross-lingual transfer learning can improve the generalization of models trained on low-resource language datasets. Accordingly, we use the state-of-the-art cross-lingual contextualized language model XLM-RoBERTa together with parallel convolutional neural networks to detect Persian COVID-19 fake news. Moreover, we apply cross-domain knowledge transfer, using both the English COVID-19 dataset and a general-domain Persian fake news dataset, to improve the results. The combination of cross-lingual and cross-domain transfer learning outperforms the other models and significantly beats the baseline by 2.39%.
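
To make the cross-lingual and cross-domain transfer idea concrete, here is a minimal Python sketch under stated assumptions (it is not the paper's pipeline): a single XLM-RoBERTa classifier is fine-tuned in stages, first on an English COVID-19 fake-news set and a general-domain Persian set, then on the small Persian COVID-19 target set, so the shared weights carry knowledge across languages and domains. The dataset names, hyperparameters, and the load_datasets helper are hypothetical.

```python
# Rough sketch of staged cross-lingual / cross-domain fine-tuning (assumptions throughout).
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)


def finetune(texts, labels, epochs=2, batch_size=16):
    """One fine-tuning stage; the shared weights keep what earlier stages learned."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    data = list(zip(enc["input_ids"], enc["attention_mask"], torch.tensor(labels)))
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, y in loader:
            out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()


# Hypothetical datasets, each a (list_of_texts, list_of_0_or_1_labels) pair:
# english_covid, persian_general, persian_covid = load_datasets()  # assumed helper
# finetune(*english_covid)     # cross-lingual source: English, COVID-19 domain
# finetune(*persian_general)   # cross-domain source: Persian, general news
# finetune(*persian_covid)     # low-resource target: Persian, COVID-19 domain
```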

3.
Digital Scholarship in the Humanities ; 2022.
Article in English | Web of Science | ID: covidwho-2018008

ABSTRACT

The COVID-19 pandemic created an infodemic, confronting people with a massive amount of information through social media platforms such as Twitter and Instagram. These platforms have made information easy to circulate and have paved the way for information and misinformation to mix. One way to counter an infodemic is to limit the distribution of false information and filter out fake news, reducing its negative impact on society. This article studies the properties of fake news in English and Persian using the textual information carried by the language of the news. To this end, we study properties of the text based on information theory, stylometric features extracted from raw text, text readability, and linguistic information such as phonology, syntax, and morphology. We use the XLM-RoBERTa representation with a convolutional neural network classifier as the basic model to detect English and Persian COVID-19 fake news. In addition, we propose learning scenarios in which different feature sets are concatenated with the contextualized representation. According to the experimental results, adding any of these textual features to the basic model improves classifier performance for both English and Persian. Readability and stylometric features are the most effective for detecting English fake news, improving performance by 2.72% in F-measure. Augmenting this feature set with the information-theoretic and morphological features improves classifier performance by 3.79% in F-measure for Persian.
